Boosting Model Inversion Attacks with Adversarial Examples
Authors

Abstract
Model inversion attacks involve reconstructing the training data of a target model, which raises serious privacy concerns for machine learning models. However, these attacks, especially learning-based methods, are likely to suffer from low attack accuracy, i.e., a low classification accuracy of the reconstructed data as evaluated by classifiers. Recent studies showed that an alternative strategy, GAN-based optimization, can improve attack accuracy effectively. However, this series of GAN-based attacks reconstructs only class-representative data for each class, whereas learning-based attacks can reconstruct diverse data for different samples in each class. Hence, in this paper, we propose a new training paradigm for learning-based attacks that achieves higher attack accuracy in a black-box setting. First, we regularize the training process of the attack model with an added semantic loss function and, second, we inject adversarial examples into the training data to increase the diversity of the class-related parts (i.e., the essential features for classification tasks) of the data. This scheme guides the attack model to pay more attention to the class-related parts of the original data during the reconstruction process. The experimental results show that our method greatly boosts the performance of existing learning-based attacks. Even when no extra queries are allowed, our approach still improves attack accuracy, showing that the severity of the threat posed by such adversaries is underestimated and more robust defenses are required.
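To make the second ingredient concrete, the sketch below shows one common way to generate adversarial examples for data augmentation: a single FGSM-style perturbation step. This is a minimal illustration on a logistic-regression "target", not the paper's actual pipeline; the function name `fgsm_perturb` and the epsilon value are assumptions for demonstration.

```python
# Minimal sketch: augment training data with FGSM-style adversarial copies.
# The logistic-regression model and all names here are illustrative only,
# not taken from the paper's implementation.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon=0.1):
    """One FGSM step: x' = x + epsilon * sign(grad_x loss).

    For logistic regression with binary cross-entropy loss, the input
    gradient has the closed form (p - y) * w.
    """
    p = sigmoid(x @ w + b)                # model's predicted probability
    grad_x = (p - y) * w                  # d(BCE)/dx
    return x + epsilon * np.sign(grad_x)  # perturb along the gradient sign

# Toy data: one 4-dimensional sample with label 1.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0
x = rng.normal(size=4)
y = 1.0

x_adv = fgsm_perturb(x, y, w, b)

# The adversarial copy is kept alongside the clean sample, enlarging
# the diversity of class-related features in the training set.
augmented = np.stack([x, x_adv])
print(augmented.shape)  # (2, 4)
```

Each coordinate of the adversarial copy moves by exactly epsilon (the sign of a nonzero gradient is ±1), so the perturbation stays small while still shifting the class-related features the classifier relies on.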
Related Articles
Boosting Adversarial Attacks with Momentum
Deep neural networks are vulnerable to adversarial examples, which poses security concerns on these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most of the existing adversarial attacks can only fool a black-box model with a low success rate because...
Generating Adversarial Examples with Adversarial Networks
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires mor...
Delving into Transferable Adversarial Examples and Black-box Attacks
An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small scale datasets. In this work, we are the first to conduct an extensive study of the transferabilit...
Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
We construct targeted audio adversarial examples on automatic speech recognition. Given any audio waveform, we can produce another that is over 99.9% similar, but transcribes as any phrase we choose (recognizing up to 50 characters per second of audio). We apply our white-box iterative optimization-based attack to Mozilla’s implementation DeepSpeech end-to-end, and show it has a 100% success ra...
Adversarial Examples: Attacks and Defenses for Deep Learning
With rapid progress and great successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks have been recently found vulnerable to well-designed input samples, called adversarial examples. Adversarial examples are imperceptible to human but can easily fool deep neural networks in the testing/deploying stage. The ...
Journal

Journal title: IEEE Transactions on Dependable and Secure Computing
Year: 2023
ISSN: 1941-0018, 1545-5971, 2160-9209
DOI: https://doi.org/10.1109/tdsc.2023.3285015